Psychomotor retardation in depression is associated with changes in speech timing during dyadic clinical interviews. In this work, we study speech timing features from free-living dyadic interactions. Besides the possibility of continuous monitoring to complement clinical visits, studies in free-living conditions also allow inferring sociability features, such as dyadic interaction frequency, that are related to depression. We adapt a speaker-count estimator into a dyadic interaction detector, achieving a specificity of 89.5% and a sensitivity of 86.1% on the DIHARD dataset. Using the detector, we obtain speech timing features from multi-day audio recordings of 32 participants, comprising 13 healthy individuals, 11 individuals with depression, and 8 individuals with psychiatric disorders. For participants with no or mild depression, dyadic interaction frequency increases with depression severity, indicating a potential marker of depression onset. However, for participants with moderate or severe depression, dyadic interaction frequency decreases with increasing depression severity. Among the speech timing features, response time has a significant positive correlation with depression severity. Our work shows the potential of dyadic interaction analysis of free-living audio recordings to obtain markers of depression severity.
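The detector's reported operating point combines two standard confusion-matrix rates. As a minimal illustration (the counts below are hypothetical, constructed only to reproduce the stated 86.1% / 89.5% rates, and are not the paper's data):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Hypothetical confusion-matrix counts for a dyadic-interaction detector
sens, spec = sensitivity_specificity(tp=861, fn=139, tn=895, fp=105)
print(f"sensitivity={sens:.1%}, specificity={spec:.1%}")
# prints "sensitivity=86.1%, specificity=89.5%"
```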
Natural Language Generation (NLG) represents a large collection of tasks in the field of NLP. While many of these tasks have been tackled well by the cross-entropy (CE) loss, the task of dialog generation poses a few unique challenges for this loss function. First, CE loss assumes that, for any given input, the only possible output is the one available as the ground truth in the training dataset. In general, this is not true for any task, as there can be multiple semantically equivalent sentences, each with a different surface form. This problem is exacerbated further for the dialog generation task, as there can be multiple valid responses (for a given context) that not only have different surface forms but are also not semantically equivalent. Second, CE loss does not take the context into consideration while processing the response and hence treats all ground truths with equal importance, irrespective of the context. But we may want our final agent to avoid certain classes of responses (e.g., bland, non-informative, or biased responses) and give relatively higher weight to more context-specific responses. To circumvent these shortcomings of the CE loss, in this paper we propose a novel loss function, CORAL, that directly optimizes recently proposed estimates of human preference for generated responses. Using CORAL, we can train dialog generation models without assuming the non-existence of valid responses other than the ground truth. Also, the CORAL loss is computed based on both the context and the response. Extensive comparisons on two benchmark datasets show that the proposed methods outperform strong state-of-the-art baseline models of different sizes.
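The general idea of a context-aware alternative to plain CE can be sketched as a preference-weighted sequence loss: each response's negative log-likelihood is scaled by a scalar reward from a preference model scoring the (context, response) pair, so bland or low-preference responses contribute less. This is a simplified illustration of the weighting principle under assumed inputs, not CORAL's exact formulation:

```python
import numpy as np

def reward_weighted_nll(token_log_probs, mask, rewards):
    """Preference-weighted sequence loss (illustrative, not CORAL's exact form).

    token_log_probs: (batch, seq_len) log p(token | prefix, context)
    mask:            (batch, seq_len) 1 for real tokens, 0 for padding
    rewards:         (batch,) preference-model scores in [0, 1]
    Returns the mean over the batch of reward-scaled per-response NLL.
    """
    # Mean NLL per response, ignoring padded positions
    nll = -(token_log_probs * mask).sum(axis=1) / np.maximum(mask.sum(axis=1), 1)
    # Scale by the context-dependent reward: high-preference responses
    # are fit more strongly; a zero-reward response contributes nothing
    return float((rewards * nll).mean())
```

Unlike plain CE, which weights every ground-truth response equally, the reward factor here lets a (hypothetical) preference model down-weight responses the final agent should avoid.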